Can AI be used to cheat on multiple-choice exams? Florida professor uncovers the truth

A Florida State University professor has uncovered a method to detect whether students are using generative AI, like ChatGPT, to cheat on multiple-choice exams. This discovery opens up new possibilities for educators concerned about the implications of AI in academic settings.

Since the launch of OpenAI’s ChatGPT in November 2022, the academic community has voiced concerns about the potential misuse of generative AI. While much of the focus has been on AI-generated essays and term papers, the possibility of cheating on multiple-choice tests with AI tools has received less attention. This gap in oversight is now being addressed.

Kenneth Hanson, a professor at Florida State University, became interested in this issue after publishing research on the outcomes of in-person versus online exams. When a peer reviewer questioned how ChatGPT might impact these outcomes, Hanson teamed up with Ben Sorenson, a machine-learning engineer at FSU, to investigate further. They collected data over five semesters, analyzing nearly 1,000 exam questions. Their findings, published this summer, revealed a distinct pattern: ChatGPT correctly answered most of the “difficult” questions but often got the “easy” ones wrong. This pattern allowed Hanson and his team to identify AI-generated answers with nearly 100 percent accuracy.
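
The published pattern suggests a simple signal: an answer sheet that does unusually well on hard questions while missing easy ones looks more like AI output than like a student's work. As a purely illustrative sketch of that idea, the Python below flags such sheets; the function name, difficulty scores, and threshold are hypothetical and are not drawn from Hanson's actual program.

```python
# Illustrative sketch only: a heuristic inspired by the reported pattern,
# not Hanson's program. Difficulty scores and the threshold are hypothetical.

def flag_suspicious(responses, difficulty, threshold=0.3):
    """Flag an answer sheet whose correctness pattern looks AI-like.

    responses:  dict mapping question id -> True/False (answered correctly)
    difficulty: dict mapping question id -> fraction of past students who
                missed the question (higher = harder)
    """
    easy = [q for q, d in difficulty.items() if d < 0.5]   # most students get these right
    hard = [q for q, d in difficulty.items() if d >= 0.5]  # most students get these wrong

    easy_correct = sum(responses[q] for q in easy) / len(easy)
    hard_correct = sum(responses[q] for q in hard) / len(hard)

    # Typical human pattern: better on easy questions than hard ones.
    # Reported AI pattern: strong on hard questions, weak on easy ones.
    return (hard_correct - easy_correct) > threshold


# Example: a sheet that aces the hard items but misses the easy ones gets flagged.
difficulty = {"q1": 0.2, "q2": 0.3, "q3": 0.7, "q4": 0.8}
sheet = {"q1": False, "q2": False, "q3": True, "q4": True}
print(flag_suspicious(sheet, difficulty))  # True
```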

Hanson explained that “ChatGPT isn’t designed to be a right-answer generator but rather an answer generator.” Students approach problems differently from how the AI processes them, which leads to errors on simpler questions. This raises questions about how reliable tools like ChatGPT are for creating or answering multiple-choice tests.

For students considering using ChatGPT to cheat on multiple-choice exams, the process involves typing each question and its possible answers into the AI. If no proctoring software is in place, students could copy and paste questions directly into their browsers during exams. However, Victor Lee, faculty lead of AI and education at Stanford University’s Accelerator for Learning, believes this extra step might discourage students from using AI for cheating. Lee suggests that students typically seek the simplest solutions, and in the context of multiple-choice exams, they might not find AI tools to be the most efficient option.

Despite the effectiveness of Hanson’s detection method, he acknowledges that it may not be practical for individual professors to implement. The process requires running answers through his program multiple times, which could be too time-consuming for everyday use. Hanson pointed out that while the method works well, the effort required to catch a small percentage of students might not be worth it for individual educators.

Hanson sees the potential for his method to be applied on a larger scale by proctoring companies like Data Recognition Corporation and ACT. These organizations, with their extensive data access, could implement the technology to monitor AI-driven cheating globally. ACT, however, stated that it is not currently adopting generative AI detection but continues to evaluate and improve its security methods.

Turnitin, a major player in AI-detection technology, has not yet developed a product to track multiple-choice cheating. The company does, however, offer software for administering and securing digital exams, a sign of the ongoing demand for robust assessment tools in an era increasingly shaped by AI.

Looking forward, Hanson plans to shift his research focus to understanding why ChatGPT gets certain questions wrong when students get them right. This insight could help educators design more effective exams and better understand the limitations of AI in academic settings.

For now, concerns about AI-assisted cheating on essays remain more pressing for many educators. However, as universities adopt AI-focused policies, some of those concerns may ease. Lee believes that instead of trying to block AI’s integration into education, institutions should focus on adapting their teaching and assessment methods to coexist with the technology. Change requires effort, but Lee suggests the key question is how to incorporate AI effectively into educational practice rather than how to resist its inevitable influence.
